Results 1 - 10 of 10
1.
Ann Surg ; 2022 Jul 07.
Article in English | MEDLINE | ID: covidwho-2294032

ABSTRACT

OBJECTIVE: To develop 2 distinct preoperative and intraoperative risk scores to predict postoperative pancreatic fistula (POPF) after distal pancreatectomy (DP) to improve preventive and mitigation strategies, respectively. BACKGROUND: POPF remains the most common complication after DP. Despite several known risk factors, an adequate risk model has not been developed yet. METHODS: Two prediction risk scores were designed using data of patients undergoing DP in 2 Italian centers (2014-2016) utilizing multivariable logistic regression. The preoperative score (calculated before surgery) aims to facilitate preventive strategies and the intraoperative score (calculated at the end of surgery) aims to facilitate mitigation strategies. Internal validation was achieved using bootstrapping. These data were pooled with data from 5 centers from the United States and the Netherlands (2007-2016) to assess discrimination and calibration in an internal-external validation procedure. RESULTS: Overall, 1336 patients after DP were included, of whom 291 (22%) developed POPF. The preoperative distal fistula risk score (preoperative D-FRS) included 2 variables: pancreatic neck thickness [odds ratio: 1.14; 95% confidence interval (CI): 1.11-1.17 per mm increase] and pancreatic duct diameter (OR: 1.46; 95% CI: 1.32-1.65 per mm increase). The model performed well with an area under the receiver operating characteristic curve of 0.83 (95% CI: 0.78-0.88) and 0.73 (95% CI: 0.70-0.76) upon internal-external validation. Three risk groups were identified: low risk (<10%), intermediate risk (10%-25%), and high risk (>25%) for POPF with 238 (18%), 684 (51%), and 414 (31%) patients, respectively. The intraoperative risk score (intraoperative D-FRS) added body mass index, pancreatic texture, and operative time as variables with an area under the receiver operating characteristic curve of 0.80 (95% CI: 0.74-0.85). CONCLUSIONS: The preoperative and the intraoperative D-FRS are the first validated risk scores for POPF after DP and are readily available at: http://www.pancreascalculator.com. The 3 distinct risk groups allow for personalized treatment and benchmarking.
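The preoperative D-FRS is a two-variable multivariable logistic model, so an individual risk estimate follows directly from the logistic formula. The Python sketch below illustrates this mechanic using the reported per-mm odds ratios as coefficients; the intercept is a hypothetical placeholder, not the published value (for actual estimates, use the calculator at http://www.pancreascalculator.com).

```python
import math

def preoperative_dfrs_risk(neck_thickness_mm: float, duct_diameter_mm: float,
                           intercept: float = -3.5) -> float:
    """Illustrative preoperative D-FRS-style risk estimate.

    Coefficients are the natural logs of the odds ratios reported in the
    abstract (OR 1.14 per mm neck thickness, OR 1.46 per mm duct diameter);
    the intercept is a hypothetical placeholder, not the published value.
    """
    beta_neck = math.log(1.14)   # per mm pancreatic neck thickness
    beta_duct = math.log(1.46)   # per mm pancreatic duct diameter
    lp = intercept + beta_neck * neck_thickness_mm + beta_duct * duct_diameter_mm
    return 1.0 / (1.0 + math.exp(-lp))  # logistic link -> POPF probability

# Example: map the probability onto the three published risk groups.
risk = preoperative_dfrs_risk(neck_thickness_mm=12, duct_diameter_mm=3)
group = "low" if risk < 0.10 else "intermediate" if risk <= 0.25 else "high"
print(f"Predicted POPF risk {risk:.0%} -> {group}-risk group")
```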

2.
J Clin Epidemiol ; 154: 75-84, 2023 02.
Article in English | MEDLINE | ID: covidwho-2241601

ABSTRACT

OBJECTIVES: To assess improvement in the completeness of reporting of coronavirus disease 2019 (COVID-19) prediction models after the peer review process. STUDY DESIGN AND SETTING: Studies included in a living systematic review of COVID-19 prediction models, with both preprint and peer-reviewed published versions available, were assessed. The primary outcome was the change in percentage adherence to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) reporting guideline between preprint and published manuscripts. RESULTS: Nineteen studies were identified, including seven (37%) model development studies, two (11%) external validations of existing models, and 10 (53%) papers reporting on both development and external validation of the same model. Median percentage adherence among preprint versions was 33% (min-max: 10 to 68%). Percentage adherence to TRIPOD items increased from preprint to publication in 11/19 studies (58%), with adherence unchanged in the remaining eight studies. The median change in adherence was just 3 percentage points (pp; min-max: 0-14 pp) across all studies. No association was observed between the change in percentage adherence and preprint score, journal impact factor, or time between journal submission and acceptance. CONCLUSIONS: The preprint reporting quality of COVID-19 prediction modeling studies is poor and did not improve much after peer review, suggesting that peer review had a trivial effect on the completeness of reporting during the pandemic.
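As a minimal sketch of the adherence metric used above, the snippet below computes percentage adherence over a set of checklist items and the preprint-to-publication change in percentage points; the item names and values are hypothetical, not taken from the review.

```python
# Sketch of the adherence metric: percentage of applicable TRIPOD items
# reported, and the preprint-to-publication change. Item lists are hypothetical.

def percent_adherence(items_reported: dict) -> float:
    """Share of applicable checklist items that were reported (0-100%)."""
    return 100 * sum(items_reported.values()) / len(items_reported)

preprint  = {"title": True, "abstract": False, "participants": True, "outcome": False}
published = {"title": True, "abstract": True,  "participants": True, "outcome": False}

change_pp = percent_adherence(published) - percent_adherence(preprint)
print(f"Adherence changed by {change_pp:.0f} percentage points after peer review")
```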


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Prognosis , Pandemics
3.
Lancet Digit Health ; 4(12): e853-e855, 2022 Dec.
Article in English | MEDLINE | ID: covidwho-2229344
4.
Trials ; 23(1): 242, 2022 Mar 29.
Article in English | MEDLINE | ID: covidwho-2079532

ABSTRACT

BACKGROUND: The rapidly increasing number of elderly patients (≥ 65 years old) with traumatic brain injury (TBI) is accompanied by substantial medical and economic consequences. An acute subdural hematoma (ASDH) is the most common injury in elderly patients with TBI, and surgical versus conservative treatment of this patient group remains an important clinical dilemma. Current Brain Trauma Foundation (BTF) guidelines are not based on high-quality evidence and compliance is low, allowing for large international treatment variation. The RESET-ASDH trial is an international multicenter RCT on the (cost-)effectiveness of early neurosurgical hematoma evacuation versus initial conservative treatment in elderly patients with a traumatic ASDH (t-ASDH). METHODS: In total, 300 patients will be recruited from 17 Belgian and Dutch trauma centers. Patients ≥ 65 years presenting with either a GCS ≥ 9 and a t-ASDH > 10 mm, or a t-ASDH < 10 mm and a midline shift > 5 mm, or a GCS < 9 with a t-ASDH < 10 mm and a midline shift < 5 mm without an extracranial explanation for the comatose state, and for whom clinical equipoise exists, will be randomized to early surgical hematoma evacuation or initial conservative management with the possibility of delayed secondary surgery. When possible, patients or their legal representatives will be asked for consent before inclusion. When obtaining patient or proxy consent is impossible within the therapeutic time window, patients are enrolled using a deferred consent procedure. Medical-ethical approval was obtained in the Netherlands and Belgium. The choice of neurosurgical techniques will be left to the discretion of the neurosurgeon. Patients will be analyzed according to an intention-to-treat design. The primary endpoint will be functional outcome on the GOS-E after 1 year. Patient recruitment starts in 2022, with the exact timing depending on the current COVID-19 crisis, and is expected to end in 2024. DISCUSSION: The study results will be implemented after publication and presented at international conferences. Depending on the trial results, the current Brain Trauma Foundation guidelines will either be substantiated by high-quality evidence or will have to be altered. TRIAL REGISTRATION: Nederlands Trial Register (NTR), Trial NL9012. ClinicalTrials.gov, Trial NCT04648436.


Subject(s)
Brain Injuries, Traumatic , COVID-19 , Hematoma, Subdural, Acute , Aged , Hematoma, Subdural, Acute/diagnosis , Hematoma, Subdural, Acute/surgery , Humans , Multicenter Studies as Topic , Neurosurgical Procedures , Randomized Controlled Trials as Topic , Trauma Centers
5.
BMC Med Res Methodol ; 22(1): 35, 2022 01 30.
Article in English | MEDLINE | ID: covidwho-1699687

ABSTRACT

BACKGROUND: We investigated whether influenza data could be used to develop prediction models for COVID-19, to increase the speed at which prediction models can reliably be developed and validated early in a pandemic. We developed COVID-19 Estimated Risk (COVER) scores that quantify a patient's risk of hospital admission with pneumonia (COVER-H), hospitalization with pneumonia requiring intensive services or death (COVER-I), or fatality (COVER-F) in the 30 days following COVID-19 diagnosis, using historical data from patients with influenza or flu-like symptoms, and tested these scores in COVID-19 patients. METHODS: We analyzed a federated network of electronic medical records and administrative claims data from 14 data sources and 6 countries, containing data collected on or before April 27, 2020. We used a 2-step process to develop the 3 scores using historical data from patients with influenza or flu-like symptoms at any time prior to 2020. The first step was to create a data-driven model using LASSO regularized logistic regression; its covariates were used to construct aggregate covariates for the second step, in which the COVER scores were developed using a smaller set of features. These 3 COVER scores were then externally validated on patients with 1) influenza or flu-like symptoms and 2) confirmed or suspected COVID-19 diagnosis, across 5 databases from South Korea, Spain, and the United States. Outcomes included i) hospitalization with pneumonia, ii) hospitalization with pneumonia requiring intensive services or death, and iii) death in the 30 days after the index date. RESULTS: Overall, 44,507 COVID-19 patients were included for model validation. We identified 7 predictors (history of cancer, chronic obstructive pulmonary disease, diabetes, heart disease, hypertension, hyperlipidemia, and kidney disease) that, combined with age and sex, discriminated which patients would experience any of our three outcomes. The models achieved good performance in both the influenza and COVID-19 cohorts. For COVID-19, the AUC ranges were COVER-H: 0.69-0.81, COVER-I: 0.73-0.91, and COVER-F: 0.72-0.90. Calibration varied across the validations, with some of the COVID-19 validations being less well calibrated than the influenza validations. CONCLUSIONS: This research demonstrated the utility of using a proxy disease to develop a prediction model. The 3 COVER models with 9 predictors that were developed using influenza data performed well in COVID-19 patients for predicting hospitalization, intensive services, and fatality. The scores showed good discriminatory performance that transferred well to the COVID-19 population. There was some miscalibration in the COVID-19 validations, potentially due to the difference in symptom severity between the two diseases. A possible solution is to recalibrate the models in each location before use.
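A minimal sketch of the first, data-driven modelling step described above (LASSO-regularized logistic regression used to flag candidate predictors), run on synthetic data; the actual COVER pipeline was executed on the federated OHDSI network, not with this code.

```python
# LASSO (L1-regularised) logistic regression: non-zero coefficients indicate
# the covariates retained as candidate predictors. Synthetic data stand in
# for the federated records used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                      # baseline covariates
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 2))))

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(X, y)

selected = np.flatnonzero(lasso.coef_[0])            # covariates kept by LASSO
print("selected covariate indices:", selected)
print("apparent AUC:", round(roc_auc_score(y, lasso.predict_proba(X)[:, 1]), 2))
```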


Subject(s)
COVID-19 , Influenza, Human , Pneumonia , COVID-19 Testing , Humans , Influenza, Human/epidemiology , SARS-CoV-2 , United States
6.
BMC Health Serv Res ; 21(1): 957, 2021 Sep 13.
Article in English | MEDLINE | ID: covidwho-1405306

ABSTRACT

BACKGROUND: The novel coronavirus SARS-CoV-2 causes COVID-19 in symptomatic patients. COVID-19 patients admitted to hospital require early assessment and care, including isolation. The National Early Warning Score (NEWS) and its updated version, NEWS2, are simple physiological scoring systems used in hospitals that may be useful in the early identification of COVID-19 patients. We investigated the performance of multiple enhanced NEWS2 models in predicting the risk of COVID-19. METHODS: Our cohort included unplanned adult medical admissions discharged over 3 months (11 March 2020 to 13 June 2020) from two hospitals (YH for model development; SH for external model validation). We used logistic regression to build multiple prediction models for the risk of COVID-19 using the first electronically recorded NEWS2 within ± 24 hours of admission. Model M0' included NEWS2; model M1' included NEWS2 + age + sex; and model M2' extended model M1' with subcomponents of NEWS2 (including diastolic blood pressure, oxygen flow rate, and oxygen scale). Model performance was evaluated according to discrimination (c-statistic), calibration (graphically), and clinical usefulness at NEWS2 ≥ 5. RESULTS: The prevalence of COVID-19 was higher in SH (11.0% = 277/2520) than in YH (8.7% = 343/3924), with a higher first NEWS2 score (SH 3.2 vs YH 2.8) but similar in-hospital mortality (SH 8.4% vs YH 8.2%). The c-statistics for predicting the risk of COVID-19 for models M0', M1', and M2' in the development dataset were M0': 0.71 (95% CI 0.68-0.74), M1': 0.67 (95% CI 0.64-0.70), and M2': 0.78 (95% CI 0.75-0.80). For the validation dataset the c-statistics were M0': 0.65 (95% CI 0.61-0.68), M1': 0.67 (95% CI 0.64-0.70), and M2': 0.72 (95% CI 0.69-0.75). The calibration slope was similar across all models, but model M2' had the highest sensitivity (M0': 44% (95% CI 38-50%), M1': 53% (95% CI 47-59%), M2': 57% (95% CI 51-63%)) and specificity (M0': 75% (95% CI 73-77%), M1': 72% (95% CI 70-74%), M2': 76% (95% CI 74-78%)) in the validation dataset at NEWS2 ≥ 5. CONCLUSIONS: Model M2' appears to be reasonably accurate for predicting the risk of COVID-19. It may be clinically useful as an early warning system at the time of admission, especially for triaging large numbers of unplanned hospital admissions.
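A minimal sketch of the modelling comparison described above: three nested logistic regression models (M0': NEWS2 only; M1': NEWS2 + age + sex; M2': M1' plus NEWS2 subcomponents), each summarized by its c-statistic. The data frame and column names here are synthetic stand-ins for the recorded admission variables, not the YH/SH data.

```python
# Fit the three nested logistic models on synthetic admission data and report
# each model's c-statistic (area under the ROC curve).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "news2": rng.integers(0, 15, n),
    "age": rng.integers(18, 95, n),
    "sex": rng.integers(0, 2, n),
    "dbp": rng.normal(75, 12, n),
    "o2_flow_rate": rng.exponential(1.0, n),
    "o2_scale": rng.integers(1, 3, n),
})
logit_true = -3 + 0.15 * df["news2"] + 0.3 * df["o2_flow_rate"]
df["covid19"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

def c_statistic(formula: str) -> float:
    fit = smf.logit(formula, data=df).fit(disp=False)
    return roc_auc_score(df["covid19"], fit.predict(df))

for name, formula in {
    "M0'": "covid19 ~ news2",
    "M1'": "covid19 ~ news2 + age + sex",
    "M2'": "covid19 ~ news2 + age + sex + dbp + o2_flow_rate + o2_scale",
}.items():
    print(name, round(c_statistic(formula), 2))
```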


Subject(s)
COVID-19 , Early Warning Score , Adult , Hospitals , Humans , Patient Admission , Retrospective Studies , SARS-CoV-2
7.
Crit Care Explor ; 3(5): e0402, 2021 May.
Article in English | MEDLINE | ID: covidwho-1254873

ABSTRACT

BACKGROUND: Acute respiratory failure occurs frequently in hospitalized patients, often begins outside the ICU, and is associated with increased length of stay, cost, and mortality. Delays in recognizing decompensation are associated with worse outcomes. OBJECTIVES: The objective of this study was to predict acute respiratory failure requiring any advanced respiratory support (including noninvasive ventilation). With the advent of the coronavirus disease pandemic, concern regarding acute respiratory failure has increased. DERIVATION COHORT: All admission encounters from January 2014 to June 2017 from three hospitals in the Emory Healthcare network (82,699). VALIDATION COHORT: External validation cohort: all admission encounters from January 2014 to June 2017 from a fourth hospital in the Emory Healthcare network (40,143). Temporal validation cohort: all admission encounters from February to April 2020 from four hospitals in the Emory Healthcare network, coronavirus disease tested (2,564) and coronavirus disease positive (389). PREDICTION MODEL: Vital signs, laboratory, and demographic data were extracted for all admission encounters. Exclusion criteria included invasive mechanical ventilation started in the operating room or advanced respiratory support within the first 8 hours of admission. Encounters were discretized into hourly intervals from 8 hours after admission until discharge or initiation of advanced respiratory support, and binary labeled for advanced respiratory support. Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment, our eXtreme Gradient Boosting-based algorithm, was compared against the Modified Early Warning Score. RESULTS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment had significantly better discrimination than the Modified Early Warning Score (area under the receiver operating characteristic curve 0.85 vs 0.57 [test], 0.84 vs 0.61 [external validation]). Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment maintained a positive predictive value (0.31-0.21) similar to that of a Modified Early Warning Score greater than 4 (0.29-0.25) while identifying 6.62 (validation) to 9.58 (test) times more true positives. Furthermore, Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment performed more effectively in temporal validation (area under the receiver operating characteristic curve 0.86 [coronavirus disease tested], 0.93 [coronavirus disease positive]), while identifying 4.25-4.51 times more true positives. CONCLUSIONS: Prediction of Acute Respiratory Failure requiring advanced respiratory support in Advance of Interventions and Treatment is more effective than the Modified Early Warning Score in predicting respiratory failure requiring advanced respiratory support at external validation and in coronavirus disease 2019 patients. Silent prospective validation is necessary before local deployment.
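A minimal sketch of the comparison described above: a gradient-boosted classifier on hourly feature vectors versus a simple early-warning-score threshold, both evaluated by AUROC on held-out encounters. The data, features, and crude stand-in score are synthetic, and the snippet assumes the xgboost package is available; it is not the published model.

```python
# Compare a gradient-boosted classifier against a simple score on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(20000, 12))                     # hourly feature vectors (synthetic)
score = np.clip((X[:, 0] * 2 + 6).round(), 0, 14)    # crude early-warning-score stand-in
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + X[:, 1] - 3))))

X_tr, X_te, y_tr, y_te, _, score_te = train_test_split(
    X, y, score, test_size=0.3, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)

print("boosted model AUROC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 2))
print("score-only AUROC:   ", round(roc_auc_score(y_te, score_te), 2))
```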

8.
JMIR Med Inform ; 9(4): e21547, 2021 Apr 05.
Article in English | MEDLINE | ID: covidwho-1195972

ABSTRACT

BACKGROUND: SARS-CoV-2 is straining health care systems globally. The burden on hospitals during the pandemic could be reduced by implementing prediction models that can discriminate patients who require hospitalization from those who do not. The COVID-19 vulnerability (C-19) index, a model that predicts which patients will be admitted to hospital for treatment of pneumonia or pneumonia proxies, has been developed and proposed as a valuable tool for decision-making during the pandemic. However, the model is at high risk of bias according to the "prediction model risk of bias assessment" criteria, and it has not been externally validated. OBJECTIVE: The aim of this study was to externally validate the C-19 index across a range of health care settings to determine how well it broadly predicts hospitalization due to pneumonia in COVID-19 cases. METHODS: We followed the Observational Health Data Sciences and Informatics (OHDSI) framework for external validation to assess the reliability of the C-19 index. We evaluated the model on two different target populations, 41,381 patients who presented with SARS-CoV-2 at an outpatient or emergency department visit and 9,429,285 patients who presented with influenza or related symptoms during an outpatient or emergency department visit, to predict their risk of hospitalization with pneumonia during the following 0-30 days. In total, we validated the model across a network of 14 databases spanning the United States, Europe, Australia, and Asia. RESULTS: The internal validation performance of the C-19 index had a C statistic of 0.73, and the calibration was not reported by the authors. When we externally validated it by transporting it to SARS-CoV-2 data, the model obtained C statistics of 0.36, 0.53 (0.473-0.584) and 0.56 (0.488-0.636) on Spanish, US, and South Korean data sets, respectively. The calibration was poor, with the model underestimating risk. When validated on 12 data sets containing influenza patients across the OHDSI network, the C statistics ranged between 0.40 and 0.68. CONCLUSIONS: Our results show that the discriminative performance of the C-19 index model is low for influenza cohorts and even worse among patients with COVID-19 in the United States, Spain, and South Korea. These results suggest that C-19 should not be used to aid decision-making during the COVID-19 pandemic. Our findings highlight the importance of performing external validation across a range of settings, especially when a prediction model is being extrapolated to a different population. In the field of prediction, extensive validation is required to create appropriate trust in a model.
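A minimal sketch of the external-validation procedure described above: apply a published logistic model with frozen coefficients to a new cohort, then report discrimination (C statistic) and calibration (calibration-in-the-large and calibration slope). The coefficients and data below are hypothetical placeholders, not the actual C-19 index.

```python
# External validation of a fixed (frozen) logistic prediction model on new data.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def external_validation(X, y, coefs, intercept):
    lp = intercept + X @ coefs                       # linear predictor from the frozen model
    p = 1 / (1 + np.exp(-lp))
    c_stat = roc_auc_score(y, p)
    # Calibration-in-the-large: intercept refit with the linear predictor as offset.
    citl = sm.GLM(y, np.ones_like(lp), family=sm.families.Binomial(), offset=lp).fit().params[0]
    # Calibration slope: coefficient of the linear predictor in a refit logistic model.
    slope = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit().params[1]
    return {"c_statistic": c_stat, "calibration_in_the_large": citl, "calibration_slope": slope}

rng = np.random.default_rng(3)
X_new = rng.normal(size=(8000, 5))                   # new-cohort predictors (synthetic)
y_new = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X_new[:, 0] - 2.5))))
print(external_validation(X_new, y_new,
                          coefs=np.array([0.8, 0.1, 0.0, 0.0, 0.0]),  # hypothetical coefficients
                          intercept=-2.0))
```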

9.
JACC CardioOncol ; 2(3): 411-413, 2020 Sep.
Article in English | MEDLINE | ID: covidwho-888591
10.
BMJ ; 369: m1328, 2020 04 07.
Article in English | MEDLINE | ID: covidwho-648504

ABSTRACT

OBJECTIVE: To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease. DESIGN: Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. DATA SOURCES: PubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020. STUDY SELECTION: Studies that developed or validated a multivariable covid-19 related prediction model. DATA EXTRACTION: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). RESULTS: 37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models. CONCLUSION: Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all published prediction models are poorly reported and at high risk of bias, such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. SYSTEMATIC REVIEW REGISTRATION: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245. READERS' NOTE: This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.


Subject(s)
Coronavirus Infections/diagnosis , Models, Theoretical , Pneumonia, Viral/diagnosis , COVID-19 , Coronavirus , Disease Progression , Hospitalization/statistics & numerical data , Humans , Multivariate Analysis , Pandemics , Prognosis